Heidegger and AI

The spectacular performance of ChatGPT has made it popular now to talk about “AI safety” and the potential dangers of super-intelligent machines running amok. Progress over the past year has been so rapid that many people are wondering if we should hit the pause button on further research, to give us time to figure out how to prevent these machines from becoming super-intelligent and potentially leaving us humans behind.

Is this a serious danger? How important is “AI Safety”? These are old questions that have been asked since computers were invented, and long before.

It’s natural for any thinking person to wonder, upon understanding computers for the first time, whether these machines are fundamentally different from us humans, and whether – maybe – we can augment or even replace our own intelligence with algorithms.

Long ago, back in the 1980s, I attended a philosophy class1 in college that addressed that exact issue. The professor was an elderly Scandinavian philosopher who taught the classic Martin Heidegger text Being and Time to a class of interested computer scientists. Hubert Dreyfus, the UC Berkeley philosopher, attended as a guest. Dreyfus, famous for his book What Computers Can’t Do,2 was there as a foil for those of us who had a more optimistic view of the technology, and I remember leaving the class with a feeling that Dreyfus simply underestimated the power of algorithms. It’s just a matter of time before we prove him wrong, I thought.

Well, now, most of a lifetime later, despite the great progress in AI, I’m more sympathetic to Dreyfus’ position. It’s not that computers can’t do many or most of the things we associate with human intelligence; rather, being human is fundamentally tied to our relationships and social interactions. We will never afford non-humans the characteristics we associate with human intelligence because human intelligence is about far more than step-by-step reasoning or even ChatGPT-style “natural language” dialog. To act like a human requires you to be a human.

Heidegger was among the first to identify a crucial reason humans are special. Western philosophy, he says, incorrectly divides the world into subjects and objects, a misleading distraction from what we really are. We are not autonomous “thinking” machines with bodies and brains made of atoms that are distinct from the other piles of atoms that make up objects in the world. Rather, we are what he calls Dasein, a type of being that requires an entire book to describe.

Heidegger is often criticized for his impenetrable jargon (though supposedly it makes more sense in its original German), but his many neologisms are chosen deliberately to force us to conceptualize the world and our relationships in the different way he intends.

His argument requires a deep look at the concept of being. What does it mean for something to be? When we say that “this is” or “that is”, we make an implicit assumption about a world divided into objects, of which we humans are just one of many. Heidegger wants us to think of being in another sense.

Like much of Heideggerian philosophy, it’s hard to describe any piece of it without understanding the whole. But here’s a taste:

Most of us intuitively think of reality as a large collection of objects that includes people as well as everyday items like hammers. When you need to drive a nail, an object called “you” picks up the object we call a “hammer” and begins to pound.

Heidegger says that’s the wrong way to think about it. We don’t pick up hammers and begin to pound for no reason. What we do is pound nails. Actually, that’s not quite right: we put boards together. No, that’s not it either. We build houses. That’s closer, but still not quite what we mean. What’s actually happening is that we’re building shelters so people can stay warm … so they can survive … so they can do other things … and on and on. Everything is connected with reasons and goals that point toward different reasons and goals. You are not simply pounding a hammer – you’re contributing to a whole world of happenings.

In fact, most of the time you don’t even notice the hammer. It’s what Heidegger calls ready-to-hand. There’s not a program in your head that thinks step-by-step about hammering: move this muscle there, drop your hand here. No, you’re not really conscious of holding that hammer in the first place: you’re pounding nails, connecting boards, building houses, and so on. In that sense, what’s you is better described as something more abstract – a Dasein, a reference to that aspect of being that is generally not even thinking about itself or its actions. Dasein is the real you; you’re not the pile of atoms you see when you look in the mirror. The real you is what Heidegger calls being-in-the-world – part of an expansive network of reasons for being that includes other Daseins.

It’s only when, for some reason, that hammer is no longer able to fulfill its function that you think of it as an object at all. If the handle breaks, for example, it’s no longer ready-to-hand. Instead, it becomes merely present-at-hand. It’s then what Heidegger calls an entity, something close to (but not quite) what we mean when we say “object” or “thing”.

In Heidegger’s conception, entities aren’t particularly interesting. They’re a sideshow. They don’t even exist, not in the same sense that we human beings exist.

Importantly, that broken hammer only attracts your attention because it interrupts the whole chain of purposes that got you hammering in the first place. And it’s only now that it has your attention as an entity that something like science becomes useful. Science is a rigorous process of studying entities.


For much of the first half-century of artificial intelligence research, computer scientists assumed human behavior could be described with step-by-step instructions (algorithms) just like any other software system. To build a robot that can hammer a nail, for example, start with cameras that can calculate distances. Add motors that move the hammer precisely in the direction of the nail. Pound repeatedly until the nail is fully in the board.
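That classical recipe can be caricatured in a few lines of code. This is a purely hypothetical sketch – every function name, sensor reading, and threshold here is invented for illustration, not taken from any real robotics system – but it captures the step-by-step flavor of the approach:

```python
# A caricature of the classical, rule-based approach to robot hammering.
# All names and numbers are hypothetical, chosen only to illustrate the idea
# of explicit step-by-step instructions.

def hammer_nail(nail_height_mm: float, depth_per_blow_mm: float = 2.0) -> int:
    """Drive a nail flush using a fixed rule; return the number of blows."""
    blows = 0
    remaining = nail_height_mm
    while remaining > 0:                 # "pound repeatedly until fully in"
        remaining -= depth_per_blow_mm   # assume each blow sinks it a fixed amount
        blows += 1
    return blows

print(hammer_nail(20.0))  # a 20 mm nail at 2 mm per blow takes 10 blows
```

The fragility is already visible: every contingency (a bent nail, a slipping grip, knots in the wood) needs its own explicit rule, which is exactly where this style of programming bogged down.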

That turned out to be much harder than it looks, and eventually serious robotics researchers gave up on step-by-step algorithms. What worked much better – the technique used today in so-called neural networks – involves statistical calculations based on large datasets of actual hammers driving actual nails (or, in the case of ChatGPT, large datasets of written texts).
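The contrast with the rule-based style can be shown with a toy example. Instead of hand-coding rules, a statistical approach answers by generalizing from past examples – here a trivial nearest-neighbor lookup over an invented dataset, standing in (very loosely) for what a trained neural network does at vastly greater scale:

```python
# A toy illustration of the statistical approach: no hand-written rules,
# just past observations. The dataset values are invented for illustration.
past_examples = [
    (10.0, 5),   # (nail length in mm, blows observed)
    (20.0, 10),
    (30.0, 16),
    (40.0, 21),
]

def predict_blows(nail_length_mm: float) -> int:
    """Predict blows needed by recalling the most similar past case."""
    nearest = min(past_examples, key=lambda ex: abs(ex[0] - nail_length_mm))
    return nearest[1]

print(predict_blows(22.0))  # closest past case is 20 mm, so predict 10 blows
```

The prediction is only as good as the data: nothing in the model "knows" why hammering happens at all, which is precisely the gap Heidegger’s account points at.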

Although neural networks more closely resemble what humans are doing, there are many crucial differences that make it unlikely – Heidegger would say impossible – for them to reliably function at the level of a human five-year-old.

ChatGPT users know about “hallucinations”, the tendency of these models to make stuff up. Humans do that too, of course, whether through overconfidence or ignorance. But we don’t tend to make mistakes with basic arithmetic, or violate the laws of physics. Even toddlers know enough about human interaction to tell when somebody is teasing them, or that water volume remains the same when poured from a tall, thin cup into a short, fat one.

More importantly, we humans know about others like us. We know the appropriate distance to keep when standing next to somebody in an elevator. We know that it’s not okay to lie down on the supermarket floor and take a nap. These are not skills that we were taught. We picked them up from others, and we can become confused and disoriented when thrown into cultures where these customs are different.

Even something like science, which tries to be extra rigorous, depends on tacit assumptions that seem so obvious as to be unworthy of discussion. It’s the reason it’s so difficult to replicate the success of one lab in another; no matter how thoroughly the lab’s procedures are documented, it always seems like you need exposure to the original people in order to truly recreate the results.


The Cartesian idea of a subject (me) manipulating objects (things) is so embedded in how modern people think that you can see it in almost every discussion of AI. Whenever people compare the mind to a computer – a physical device that processes sensory inputs – they’re implicitly accepting Descartes’ assumption that we humans are a “thing”, an entity like rocks or pencils, made of atoms.

But we – Dasein – are not atoms.

References and Resources

Footnotes

  1. Philosophy 133, taught by Dagfinn Føllesdal, an Emeritus Professor at Stanford who is still alive in 2024. I took the class over the summer of 1985.

  2. see What AI Can’t Do